Efficient methods of automatic calibration for rainfall-runoff modelling in the Floreon+ system
Calibration of rainfall-runoff model parameters is an inseparable part of hydrological simulations. To achieve more accurate simulation results, it is necessary to implement an efficient calibration method that provides sufficient refinement of the model parameters within a reasonable time frame. In order to perform the calibration repeatedly for large amounts of data and to deliver results of calibrated model simulations for the flood warning process in a short time, the method also has to be automated. In this paper, several local and global optimization methods are tested for their efficiency. The main goal is to identify the calibration method that provides the most accurate results within an operational time frame (typically less than 1 hour) for use in the Floreon+ flood prediction system. All calibrations were performed on data measured during the rainfall events of 2010 in the Moravian-Silesian region (Czech Republic) using our in-house rainfall-runoff model.
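The automatic calibration loop described above can be sketched as an optimization over model parameters. The one-parameter linear-reservoir model, the parameter `k`, and the golden-section search below are illustrative assumptions for the sketch, not the Floreon+ in-house model or the specific local/global methods benchmarked in the paper:

```python
# Hedged sketch: automatic calibration of a toy linear-reservoir
# rainfall-runoff model (NOT the Floreon+ in-house model) by
# minimizing RMSE between simulated and observed discharge.

def simulate(rain, k, storage=0.0):
    """One-parameter linear reservoir: outflow = k * storage."""
    flows = []
    for r in rain:
        storage += r          # rainfall fills the reservoir
        q = k * storage       # linear outflow
        storage -= q
        flows.append(q)
    return flows

def rmse(simulated, observed):
    n = len(observed)
    return (sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n) ** 0.5

def calibrate(rain, observed, lo=0.01, hi=0.99, iters=60):
    """Golden-section search (a simple local method) for the best k."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if rmse(simulate(rain, c), observed) < rmse(simulate(rain, d), observed):
            b = d
        else:
            a = c
    return (a + b) / 2

# Synthetic "measured" event generated with a known k = 0.3
rain = [0, 5, 12, 8, 3, 1, 0, 0]
observed = simulate(rain, 0.3)
k_est = calibrate(rain, observed)
```

A global method such as differential evolution would replace `calibrate` when the error surface has multiple minima; the structure of the loop (simulate, score, refine) stays the same.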
Precision-Aware Application Execution for Energy Optimization in HPC Node Systems
Power consumption is a critical consideration in high performance computing systems, and it is becoming the limiting factor in building and operating Petascale and Exascale systems. When studying the power consumption of existing systems running HPC workloads, we find that power, energy and performance are closely related, which opens the possibility of optimizing energy consumption without sacrificing performance (much, or at all). In this paper, we propose an HPC system running a GNU/Linux OS and a Real Time Resource Manager (RTRM) that is aware of and monitors the health of the platform. On this system, a disaster management application runs with different QoS levels depending on the situation. We define two main scenarios. In normal execution, there is no risk of a disaster, but the system must still run to look ahead into the near future in case the situation changes suddenly. In the second scenario, the probability of a disaster is very high, and the allocation of additional resources to improve precision and support human decision making has to be taken into account. The paper shows that, at design time, it is possible to describe different optimal operating points that are then used at runtime by the RTOS together with the application. This environment helps a system that must run 24/7 to save energy, with the trade-off of losing precision. The paper shows a model execution whose precision of results improves by 65% on average when the number of iterations is increased from 1e3 to 1e4. This also makes the execution time one order of magnitude longer, which leads to the need for a multi-node solution. The optimal trade-off between precision and execution time is computed by the RTOS with a time overhead of less than 10% compared to a native execution.
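The design-time/runtime split described above can be illustrated with a small selector over pre-characterized operating points. The point table, deadlines and precision values below are hypothetical placeholders, not the measured figures from the paper:

```python
# Hedged sketch: design-time operating points (iterations, precision,
# execution time) consumed by a runtime selector, analogous to the
# RTRM/RTOS choosing a configuration per scenario. Values illustrative.

OPERATING_POINTS = [
    # (iterations, relative precision, execution time in seconds)
    (1_000,  0.60,  90),
    (5_000,  0.85, 450),
    (10_000, 0.99, 900),   # ~one order of magnitude slower than 1e3
]

def select_point(deadline_s, min_precision=0.0):
    """Pick the most precise point that still meets the deadline."""
    feasible = [p for p in OPERATING_POINTS
                if p[2] <= deadline_s and p[1] >= min_precision]
    return max(feasible, key=lambda p: p[1]) if feasible else None

normal = select_point(deadline_s=120)     # no imminent disaster: cheap point
critical = select_point(deadline_s=1000)  # disaster likely: maximize precision
```

The same table could be indexed by node count in a multi-node setting, with the runtime picking the cheapest allocation that meets the precision target.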
HPC For DRM - Operational Flood Management In Urban Environment
A regional flood warning system based on a combination of data processing, modelling and communication tools is proposed. The system is founded on a common framework, MIKE CUSTOMISED, giving great flexibility in tailoring solutions as needed. Where more conventional flood warning systems focus mainly on discharge predictions in the main rivers, the proposed system considers the whole catchment area and flood plain, as well as tributaries. Local floods on smaller streams and tributaries may cause high damages, particularly in urban areas; such cases require timely and reasonably accurate forecasts for proper decision making. The MIKE SHE modelling system is used to simulate flood danger maps. It also provides runoff hydrographs for the detailed hydrodynamic models of the river channel network and flood plains based on 1D/2D approximations (MIKE FLOOD). Generated flood maps are post-processed, ported to the required forms and delivered via communication channels to users. Dissemination of results is done through web pages automatically maintained by the system. The whole forecast simulation should be run at a frequency of tens of minutes, which is demanding in terms of computational power and transmission capacity. Adaptation of the whole system to High Performance Computing (HPC) solutions is an ongoing effort; it also allows parallel variant computation and real-time probabilistic assessment. The IT4Innovations National Supercomputing Centre is a research institute at the VŠB – Technical University of Ostrava. This centre provides an excellent platform for applications of HPC to Disaster Risk Management (DRM) and for their dynamic development into real-life applications used for protecting lives and minimizing damage. The paper contributes both to the theory of applying HPC to standard hydrodynamic modelling and to a real-life application: a pilot operational Flood Risk Mapping project is being developed for the capital city of the Czech Republic.
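The real-time probabilistic assessment enabled by parallel variant computation can be sketched as an exceedance probability over the peaks produced by an ensemble of model runs. The peak discharges and threshold below are illustrative assumptions, not MIKE FLOOD output:

```python
# Hedged sketch: probabilistic flood assessment over parallel forecast
# variants. Each "variant" is one model run; the numbers are invented.

def exceedance_probability(peaks, threshold):
    """Fraction of ensemble members whose peak discharge exceeds
    the flood threshold -- a simple probabilistic flood indicator."""
    return sum(p > threshold for p in peaks) / len(peaks)

# Peak discharges (m^3/s) from, e.g., 8 parallel model variants
peaks = [410, 385, 520, 470, 495, 360, 530, 445]
p_flood = exceedance_probability(peaks, threshold=450)  # -> 0.5
```

On an HPC cluster the variants would run concurrently (different rainfall scenarios or parameter sets), and this aggregation step is what turns them into a probability a decision maker can act on.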
Efficient solution of contact problems in engineering and biomechanics
Quadratic programming problems in determining muscle positions and computing their activity
Mechanical analysis of the human musculoskeletal system is a subject of importance to fundamental research as well as to practical engineering design and medical applications. For instance, we may want to investigate the basic function of the body, or we may want to design a rehabilitation exercise, or a man-driven or man-operated machine or tool. In each of these cases, and many others, it is a great asset to know motions and forces accurately during a given exercise. Computational models can assist us by quantifying certain parameters and estimating performance measures that cannot possibly be measured.
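The quadratic-programming muscle-recruitment problem the title refers to can be sketched in its simplest form: distribute muscle forces so that a required joint moment is produced while the summed squared force is minimized. With a single equality constraint this has the closed-form minimum-norm solution used below; the moment arms and target moment are hypothetical, and a realistic QP would also impose non-negativity and per-muscle force bounds:

```python
# Hedged sketch: minimum-norm muscle force sharing for one joint.
# Minimize sum(f_i^2) subject to sum(r_i * f_i) = m, whose solution is
# f = r * m / (r . r). Moment arms and target moment are invented.
# A real recruitment QP adds bounds 0 <= f_i <= f_max,i and needs a solver.

def min_norm_forces(moment_arms, required_moment):
    """Closed-form minimum-norm solution for a single moment equation."""
    rr = sum(r * r for r in moment_arms)
    scale = required_moment / rr
    return [r * scale for r in moment_arms]

# Two elbow flexors with moment arms 0.03 m and 0.05 m, target 15 N*m
arms = [0.03, 0.05]
forces = min_norm_forces(arms, 15.0)
moment = sum(r * f for r, f in zip(arms, forces))  # reproduces 15 N*m
```

Note how the muscle with the larger moment arm takes proportionally more force; that proportionality is exactly what the quadratic cost enforces.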